RowDetr: End-to-End Row Detection Using Polynomials

Cheppally, Rahul Harsha, Sharda, Ajay

arXiv.org Artificial Intelligence

The application of autonomous robots in high-throughput phenotyping has experienced a significant surge in recent years, driven by the need for precision and efficiency in agricultural tasks [1]. These advanced robotic systems are transforming the field by automating the complex task of phenotyping with unparalleled accuracy. While autonomous solutions in agriculture have been explored for decades [2], recent advancements have pushed the boundaries of this technology, particularly in addressing challenges related to GPS-denied navigation in dense crop environments. For nearly two decades, GPS-based autonomy has been the cornerstone of agricultural robotics. Studies such as [3] and [4] have showcased the use of RTK and GPS-based systems to guide tractors and harvesters with high precision. The GPS coordinates of rows can either be determined during planting or estimated using systems like [5, 6], which provide accurate crop and row location data.
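RowDetr's core idea is to represent each crop row as a polynomial in image coordinates. A minimal sketch of that representation, using NumPy's least-squares fit rather than the authors' learned parameterization; the sample points and degree here are illustrative:

```python
import numpy as np

def fit_row_polynomial(xs, ys, degree=2):
    """Fit x = f(y) for a crop row. Image rows (y) are used as the
    independent variable so near-vertical crop rows stay well-posed."""
    coeffs = np.polyfit(ys, xs, degree)
    return np.poly1d(coeffs)

# Synthetic near-straight row: x = 0.001*y^2 + 0.5*y + 10
ys = np.linspace(0, 100, 50)
xs = 0.001 * ys**2 + 0.5 * ys + 10
row = fit_row_polynomial(xs, ys)
print(round(float(row(50)), 1))  # x-position of the row at image row 50
```

Fitting x as a function of y (rather than y of x) avoids infinite slopes for rows that run toward the vanishing point.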


Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots

de Silva, Rajitha, Cielniak, Grzegorz, Wang, Gang, Gao, Junfeng

arXiv.org Artificial Intelligence

Autonomous navigation in agricultural environments is challenged by the varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all possible field variations. A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask for extraction of the central crop row using a novel central crop row selection algorithm. The novel crop row detection algorithm was tested for crop row detection performance and the capability of visual servoing along a crop row. The visual servoing-based navigation was tested in a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
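The central-row selection step can be sketched as follows. This is an illustrative simplification (choosing the candidate whose column centroid lies closest to the image centre), not the authors' published algorithm, and it assumes the segmentation mask has already been split into one boolean mask per candidate row:

```python
import numpy as np

def select_central_row(row_masks, image_width):
    """Pick the row mask whose mean column is closest to the image centre.
    row_masks: list of boolean HxW arrays, one per candidate crop row."""
    centre = image_width / 2
    def offset(mask):
        cols = np.nonzero(mask)[1]  # column indices of mask pixels
        return abs(cols.mean() - centre)
    return min(range(len(row_masks)), key=lambda i: offset(row_masks[i]))

# Three synthetic rows in a 100-column image, centred near columns 20, 50, 80
masks = []
for c in (20, 50, 80):
    m = np.zeros((50, 100), dtype=bool)
    m[:, c - 2:c + 3] = True
    masks.append(m)
print(select_central_row(masks, 100))  # index of the central row
```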


Vision based Crop Row Navigation under Varying Field Conditions in Arable Fields

de Silva, Rajitha, Cielniak, Grzegorz, Gao, Junfeng

arXiv.org Artificial Intelligence

Accurate crop row detection is often challenged by the varying field conditions present in real-world arable fields. Traditional colour-based segmentation is unable to cater for all such variations. The lack of comprehensive datasets in agricultural environments prevents researchers from developing robust segmentation models to detect crop rows. We present a dataset for crop row detection with 11 field variations from sugar beet and maize crops. We also present a novel crop row detection algorithm for visual servoing in arable fields. Our algorithm can detect crop rows under varying field conditions such as curved crop rows, weed presence, discontinuities, growth stages, tramlines, shadows and light levels. Our method only uses RGB images from a front-mounted camera on a Husky robot to predict crop rows, and it outperformed the classic colour-based crop row detection baseline. Dense weed presence within the inter-row space and discontinuities in crop rows were the most challenging field conditions for our crop row detection algorithm. Our method can detect the end of the crop row and navigate the robot towards the headland area when it reaches the end of the crop row.
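Visual servoing along a detected row typically maps the row's lateral offset and heading error to a steering command. A minimal proportional control law is sketched below; the gains and the feedback structure are hypothetical illustrations, not values from the paper:

```python
def steering_command(lateral_offset, heading_error, k_d=0.8, k_theta=1.2):
    """Proportional visual-servoing law: negative feedback on the robot's
    lateral offset from the row (metres, positive = right of row) and its
    heading error (radians). Returns an angular-velocity command for a
    differential-drive robot such as a Husky."""
    return -(k_d * lateral_offset + k_theta * heading_error)

# Robot 0.1 m right of the row, heading 0.05 rad away from it:
omega = steering_command(0.1, 0.05)
print(round(omega, 3))  # negative command steers back toward the row
```

A robot exactly on the row with zero heading error receives a zero command, so the controller is stable at the desired track.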


Towards Infield Navigation: leveraging simulated data for crop row detection

de Silva, Rajitha, Cielniak, Grzegorz, Gao, Junfeng

arXiv.org Artificial Intelligence

Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts researchers from developing deep learning-based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield crop row detection performance similar to that of a model trained with a large real-world dataset. Our method could reach the performance of a deep learning-based crop row detection model trained with real-world data while using 60% less labelled real-world data. Our model performed well against field variations such as shadows, sunlight and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in the simulation domain. An extensive comparison is done to analyze the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.
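The training-set composition described above (a small labelled real-world set topped up with simulated images) can be sketched as a simple sampling utility. The 40/60 split, budget, and file names below are illustrative assumptions, not the paper's protocol:

```python
import random

def mixed_training_set(real_images, sim_images, real_fraction=0.4,
                       total=1000, seed=0):
    """Build a training list that spends only `real_fraction` of the budget
    on real data (e.g. 0.4, i.e. 60% less real labelling) and fills the
    remainder with simulated images, then shuffles the result."""
    rng = random.Random(seed)
    n_real = int(total * real_fraction)
    batch = (rng.choices(real_images, k=n_real)
             + rng.choices(sim_images, k=total - n_real))
    rng.shuffle(batch)
    return batch

real = [f"real_{i}.png" for i in range(200)]
sim = [f"sim_{i}.png" for i in range(5000)]
train = mixed_training_set(real, sim, real_fraction=0.4, total=1000)
print(sum(name.startswith("real") for name in train))  # 400
```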


Multispectral Vineyard Segmentation: A Deep Learning approach

Barros, T., Conde, P., Gonçalves, G., Premebida, C., Monteiro, M., Ferreira, C. S. S., Nunes, U. J.

arXiv.org Artificial Intelligence

Digital agriculture has evolved significantly over the last few years due to the technological developments in automation and computational intelligence applied to the agricultural sector, including vineyards, which are a relevant crop in the Mediterranean region. This work presents a study of semantic segmentation for vine detection in real-world vineyards, exploring state-of-the-art deep segmentation networks and conventional unsupervised methods. Camera data have been collected on vineyards using an Unmanned Aerial System (UAS) equipped with a dual imaging sensor payload, namely a high-definition RGB camera and a five-band multispectral and thermal camera. Extensive experiments using deep segmentation networks and unsupervised methods have been performed on multimodal datasets representing four distinct vineyards located in the central region of Portugal. The reported results indicate that SegNet, U-Net, and ModSegNet have equivalent overall performance in vine segmentation. The results also show that multimodality slightly improves the performance of vine segmentation, but the NIR band alone is generally sufficient on most of the datasets. Furthermore, results suggest that high-definition RGB images produce equivalent or higher performance than any lower-resolution multispectral band combination. Lastly, Deep Learning (DL) networks have higher overall performance than classical methods. The code and dataset are publicly available at https://github.com/Cybonic/DL_vineyard_segmentation_study.git
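Multispectral band combinations of the kind compared above are often summarised by vegetation indices; the standard NDVI, shown below for context, illustrates why the NIR band carries most of the vegetation signal (vegetation reflects strongly in NIR and absorbs red). This is a generic index, not claimed to be the paper's exact feature set:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate vegetation; near 0 indicates bare soil."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 patch: left column vine canopy, right column bare soil
nir = np.array([[200, 90], [210, 85]])
red = np.array([[40, 80], [35, 75]])
print(np.round(ndvi(nir, red), 2))
```

The `eps` term guards against division by zero on dark pixels where both bands read zero.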


Towards agricultural autonomy: crop row detection under varying field conditions using deep learning

de Silva, Rajitha, Cielniak, Grzegorz, Gao, Junfeng

arXiv.org Artificial Intelligence

Abstract -- This paper presents a novel metric to evaluate the robustness of deep learning-based semantic segmentation approaches for crop row detection under the different field conditions encountered by a field robot. A dataset with ten main categories of field conditions was used for testing, and the effect of these conditions on the angular accuracy of crop row detection was compared. A deep convolutional encoder-decoder network is implemented to predict crop row masks from RGB input images. The predicted mask is then passed to a post-processing algorithm that extracts the crop rows. Evaluated with the novel metric, the deep learning model was found to be robust against shadows and growth stages of the crop, while performance was reduced under direct sunlight, increasing weed density, tramlines and discontinuities in crop rows. INTRODUCTION -- Computer vision algorithms have been identified as one of the key areas that must improve to advance current agricultural systems [1]. Crop row detection is a key element in developing vision-based navigation robots in agricultural robotics. Vision-based crop row detection has been a popular research question in classical computer vision.
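The angular-accuracy comparison can be illustrated by measuring the angle between a predicted row line and its ground-truth counterpart. This is a simplified stand-in for the paper's metric, with each row reduced to a two-point line:

```python
import math

def row_angle(p0, p1):
    """Angle of the line through two (x, y) image points, in degrees,
    measured from the image's vertical axis (a perfectly vertical
    crop row has angle 0)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.degrees(math.atan2(dx, dy))

def angular_error(pred, truth):
    """Absolute angular difference between predicted and ground-truth rows,
    wrapped to [0, 90] degrees since a row line has no direction."""
    diff = abs(row_angle(*pred) - row_angle(*truth)) % 180
    return min(diff, 180 - diff)

truth = ((50, 0), (50, 100))   # vertical ground-truth row
pred = ((48, 0), (52, 100))    # slightly tilted prediction
print(round(angular_error(pred, truth), 2))  # small error, in degrees
```

Wrapping to [0, 90] degrees reflects that a detected row annotated bottom-to-top is the same row annotated top-to-bottom.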